Hand Me That Task #546
Conversation
I like the structure of the test a lot now. My main concern is that the setup is too complicated for the scenario slots. I think it needs to be simplified.
Maybe each clarifying question has a penalty associated with it, but a smaller penalty than a wrong guess? Maybe 50 points per clarifying question?
Sounds good to me
I like the test, but I find it hard to understand and very exploitable.
My major concern is the color thing. I'm against banning color descriptors since we have categories. I can have the fuchsia IKEA cup next to a red paprika and a red apple in a group. Asking about red is kinda pointless there: it would provide no real information and automatically discards 400 pts. Instead, the robot may (blindly) ask if the object is a fruit (or whatever the category is) and in a lucky strike find the apple. The same applies to center, left, and right in groups of 3 objects. It's very exploitable.
For this reason I'm suggesting a rule allowing the operator to reply a "dunno 'bout dat" when the referee finds that the robot is discarding options blindly. I would also suggest that the operator only answers "yes" or "no", but I don't want to clamp interaction using NLP. Some innovative solutions might come from there.
So far, my 2 cents.
I think that my updates address all of your concerns.
I like the changes. Task seems pretty good to me now.
Requested one change. See above.
tasks/HandMeThat.tex
Outdated
@@ -47,8 +51,9 @@ \subsection{Additional rules and remarks}
\begin{enumerate}
\item \textbf{Keep going:} The robot should keep trying to determine the referred-to object until it scores or runs out of time.
\item \textbf{Pass:} The robot may say \textit{pass} to try the next object.
\item \textbf{Touch:} The robot may ask the referee to pick up or touch the object by saying \textit{pick up} or \textit{touch}. If the robot does this, identifying the object is worth only 100 points.
\item \textbf{Oops:} Incorrect guesses each reduce the value of the correct guess by 200 points, but cannot make the value of the correct guess go below 100 points.
I think you should not get points past the fourth guess. Teams could just have their robot literally guess all 30 possible objects and score 100 points.
Personally, as a referee I would not give points to a robot that is just rattling off guesses anyway, but I still think it should be clear in the rules.
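Pulling the numbers from this thread together, the scoring could be sketched roughly as below. This is only an illustration of the rules being discussed, not the actual rulebook: the 400-point base comes from the "discards 400pts" comment above, the 50-point clarifying-question penalty and the fourth-guess cutoff are still proposals, and the function name and the way the penalties stack are my own assumptions.

```python
def guess_value(wrong_guesses, clarifying_questions=0, used_touch=False):
    """Hypothetical points for a correct guess under the rules discussed here."""
    if wrong_guesses >= 4:      # proposed rule: no points past the fourth guess
        return 0
    if used_touch:              # touch rule: identifying the object is worth only 100
        return 100
    # Assumed 400-point base; each wrong guess costs 200 (the "Oops" rule) and
    # each clarifying question the proposed 50, floored at 100 points.
    value = 400 - 200 * wrong_guesses - 50 * clarifying_questions
    return max(value, 100)
```

For example, under these assumptions a clean guess scores 400, one wrong guess drops it to 200, and any further wrong guesses bottom out at the 100-point floor until the proposed cutoff zeroes it.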
I think that the rules tweak that I made fixes this. Let me know what you think.
Looks good
This is an attempt to address everyone's concerns from #528